VQ-VAE

The vector-quantized variational autoencoder (VQ-VAE) is a generative model that learns discrete latent representations: an encoder's continuous outputs are snapped to the nearest entries of a learned codebook, and a decoder reconstructs the input from those quantized codes.
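The quantization step at the core of a VQ-VAE can be sketched as follows. This is a minimal NumPy illustration with a toy, hand-picked codebook; a real model trains the codebook and encoder jointly, using a commitment loss and straight-through gradient estimation, which are omitted here.

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Map each encoder output vector to its nearest codebook entry.

    z_e:      (N, D) continuous encoder outputs
    codebook: (K, D) embedding vectors
    Returns (indices, z_q): discrete codes and their quantized vectors.
    """
    # Squared Euclidean distance from every z_e row to every codebook row.
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    indices = d.argmin(axis=1)   # discrete latent codes, shape (N,)
    z_q = codebook[indices]      # quantized representation, shape (N, D)
    return indices, z_q

# Toy example: 4 encoder outputs quantized against 3 two-dimensional codes.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
z_e = np.array([[0.1, -0.1], [0.9, 1.2], [-0.8, 0.7], [0.2, 0.1]])
indices, z_q = vector_quantize(z_e, codebook)
# indices → [0, 1, 2, 0]; the decoder would then operate on z_q.
```

The discrete `indices` are what downstream autoregressive models (as in several of the papers listed below) are trained to predict.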

VAR-3D: View-aware Auto-Regressive Model for Text-to-3D Generation via a 3D Tokenizer
Feb 14, 2026

EXCODER: EXplainable Classification Of DiscretE time series Representations
Feb 13, 2026

TLC-Plan: A Two-Level Codebook Based Network for End-to-End Vector Floorplan Generation
Feb 06, 2026

Vector Quantized Latent Concepts: A Scalable Alternative to Clustering-Based Concept Discovery
Feb 02, 2026

Is Hierarchical Quantization Essential for Optimal Reconstruction?
Jan 29, 2026

ConLA: Contrastive Latent Action Learning from Human Videos for Robotic Manipulation
Jan 31, 2026

iFSQ: Improving FSQ for Image Generation with 1 Line of Code
Jan 27, 2026

VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations
Feb 02, 2026

Class-Partitioned VQ-VAE and Latent Flow Matching for Point Cloud Scene Generation
Jan 18, 2026

TimeMar: Multi-Scale Autoregressive Modeling for Unconditional Time Series Generation
Jan 16, 2026